AI research
AAAI presidential panel – AI reasoning
In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team and other selected AI practitioners are taking part in a series of video panel discussions covering selected chapters from the report. In the third panel, the AI experts tackle the topic of AI reasoning. They consider the definition of reasoning, what reasoning is and what it should be in our AI models, planning techniques, model training, making smart (and not too smart) choices about which AI products to use, guarantees, why we shouldn't imitate human reasoning in AI models, thinking about the future, and more.
- North America > United States > Arizona (0.06)
- Europe > Netherlands > South Holland > Leiden (0.06)
- Europe > Germany (0.06)
AAAI 2025 presidential panel on the future of AI research – video discussion on AGI
In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team are taking part in a series of video panel discussions covering selected chapters from the report. In the first panel, the AI experts tackled the considerations around artificial general intelligence (AGI) development.
- Oceania > Australia (0.06)
- North America > United States > California > Alameda County > Berkeley (0.06)
- North America > Canada (0.06)
- (3 more...)
Big Tech-Funded AI Papers Have Higher Citation Impact, Greater Insularity, and Larger Recency Bias
Gnewuch, Max Martin, Wahle, Jan Philip, Ruas, Terry, Gipp, Bela
Over the past four decades, artificial intelligence (AI) research has flourished at the nexus of academia and industry. However, Big Tech companies have increasingly acquired the edge in computational resources, big data, and talent. So far, it has been largely unclear how many papers the industry funds, how their citation impact compares to non-funded papers, and what drives industry interest. This study fills that gap by quantifying the number of industry-funded papers at 10 top AI conferences (e.g., ICLR, CVPR, AAAI, ACL) and their citation influence. We analyze about 49.8K papers, about 1.8M citations from AI papers to other papers, and about 2.3M citations from other papers to AI papers from 1998-2022 in Scopus. Through seven research questions, we examine the volume and evolution of industry funding in AI research, the citation impact of funded papers, the diversity and temporal range of their citations, and the subfields in which industry predominantly acts. Our findings reveal that industry presence has grown markedly since 2015, from less than 2 percent to more than 11 percent in 2020. Between 2018 and 2022, 12 percent of industry-funded papers achieved high citation rates as measured by the h5-index, compared to 4 percent of non-industry-funded papers and 2 percent of non-funded papers. Top AI conferences engage more with industry-funded research than non-funded research, as measured by our newly proposed metric, the Citation Preference Ratio (CPR). We show that industry-funded research is increasingly insular, citing predominantly other industry-funded papers while referencing fewer non-funded papers. These findings reveal new trends in AI research funding, including a shift towards more industry-funded papers and their growing citation impact, greater insularity of industry-funded work than non-funded work, and a preference of industry-funded research to cite recent work.
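The h5-index used above to measure high citation rates is the standard metric: the largest number h such that h of the papers published in the last five years each have at least h citations. A minimal sketch of that computation (the sample citation counts are hypothetical):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each.

    Applied to papers from the last five years, this is the h5-index.
    """
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for papers from a five-year window.
recent_citations = [25, 8, 5, 3, 3, 2, 0]
print(h_index(recent_citations))  # -> 3
```

The Citation Preference Ratio (CPR) is the authors' own metric and is not defined in the summary above, so it is not sketched here.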
- North America > United States (0.28)
- Europe > Germany > Lower Saxony > Gottingen (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- (2 more...)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Applied AI (0.67)
Artificial intelligence research has a slop problem, academics say: 'It's a mess'
A single person claims to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at one of the world's leading conferences on AI and machine learning, raising questions among computer scientists about the state of AI research; one expert calls the output a 'disaster'. The author, Kevin Zhu, now runs Algoverse, an AI research and mentoring company for high schoolers. Zhu himself graduated from high school in 2018. Papers he has put out in the past two years cover subjects like using AI to locate nomadic pastoralists in sub-Saharan Africa, to evaluate skin lesions, and to translate Indonesian dialects.
- Africa > Sub-Saharan Africa (0.25)
- Oceania > Australia (0.05)
- North America > United States > Virginia (0.05)
- (3 more...)
- Leisure & Entertainment > Sports (0.70)
- Education > Educational Setting > K-12 Education > Secondary School (0.35)
Irresponsible AI: big tech's influence on AI research and associated impacts
Hernandez-Garcia, Alex, Volokhova, Alexandra, Williams, Ezekiel, Kabakibo, Dounia Shaaban
The accelerated development, deployment and adoption of artificial intelligence systems has been fuelled by the increasing involvement of big tech. This has been accompanied by increasing ethical concerns and intensified societal and environmental impacts. In this article, we review and discuss how these phenomena are deeply entangled. First, we examine the growing and disproportionate influence of big tech in AI research and argue that its drive for scaling and general-purpose systems is fundamentally at odds with the responsible, ethical, and sustainable development of AI. Second, we review key current environmental and societal negative impacts of AI and trace their connections to big tech and its underlying economic incentives. Finally, we argue that while it is important to develop technical and regulatory approaches to these challenges, these alone are insufficient to counter the distortion introduced by big tech's influence. We thus review and propose alternative strategies that build on the responsibility of implicated actors and collective action.
- North America > Canada > Quebec > Montreal (0.05)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.05)
- Asia > Middle East > Israel (0.05)
- (5 more...)
- Overview (1.00)
- Research Report (0.82)
- Law (1.00)
- Energy (0.88)
- Information Technology > Services (0.69)
- Government > Military (0.46)
Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor
Olteanu, Alexandra, Blodgett, Su Lin, Balayn, Agathe, Wang, Angelina, Diaz, Fernando, Calmon, Flavio du Pin, Mitchell, Margaret, Ekstrand, Michael, Binns, Reuben, Barocas, Solon
In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about the capabilities of AI systems. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders.
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Jordan (0.04)
- Government (1.00)
- Law (0.93)
- Health & Medicine > Therapeutic Area (0.67)
DeepSeek may have found a new way to improve AI's ability to remember
An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI's ability to "remember." Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek's new model performs on par with top models on key benchmarks. But researchers say the model's main innovation lies in how it processes information--specifically, how it stores and retrieves memories. Improving how AI models "remember" information could reduce the computing power they need to run, thus mitigating AI's large (and growing) carbon footprint.
- Asia > India (0.05)
- North America > United States > Massachusetts (0.05)
- Asia > China > Zhejiang Province > Hangzhou (0.05)
Reproducibility: The New Frontier in AI Governance
Mason-Williams, Israel, Mason-Williams, Gabryel
AI policymakers are responsible for delivering effective governance mechanisms that can provide safe, aligned and trustworthy AI development. However, the information environment offered to policymakers is characterised by an unnecessarily low signal-to-noise ratio, favouring regulatory capture and creating deep uncertainty and divides over which risks should be prioritised from a governance perspective. We posit that current publication speeds in AI, combined with a lack of strong scientific standards and weak reproducibility protocols, effectively erode the power of policymakers to enact meaningful policy and governance protocols. Our paper outlines how AI research could adopt stricter reproducibility guidelines to assist governance endeavours and improve consensus on the AI risk landscape. We evaluate the forthcoming reproducibility crisis within AI research through the lens of crises in other scientific domains, providing a commentary on how adopting reproducibility protocols such as preregistration, increased statistical power, and negative-result publication can enable effective AI governance. While we maintain that AI governance must be reactive due to AI's significant societal implications, we argue that policymakers and governments must consider reproducibility protocols as a core tool in the governance arsenal and demand higher standards for AI research. Code to replicate data and figures: https://github.com/IFMW01/reproducibility-the-new-frontier-in-ai-governance
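One of the reproducibility protocols the paper advocates is increased statistical power. As an illustration of what that means operationally, here is a sketch of a standard a-priori sample-size calculation for a two-sample comparison of means; the normal-approximation formula and the example effect size are my illustration of the general technique, not a method from the paper:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for detecting a
    standardized mean difference (Cohen's d) in a two-sided,
    two-sample test at the given significance level and power."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile corresponding to desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) at alpha = 0.05 and 80% power:
print(sample_size_per_group(0.5))  # -> 63 per group
```

Underpowered studies (small n for the true effect size) are a known driver of irreproducible findings, which is why power analysis before data collection pairs naturally with the preregistration protocol mentioned above.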
- Asia > Middle East > Israel (0.40)
- North America > United States (0.14)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (2 more...)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.93)
- Government (0.95)
- Banking & Finance > Economy (0.93)
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Therapeutic Area > Oncology (0.47)
The Collaborations among Healthcare Systems, Research Institutions, and Industry on Artificial Intelligence Research and Development
Ye, Jiancheng, Ma, Michelle, Abuhashish, Malak
Objectives: The integration of Artificial Intelligence (AI) in healthcare promises to revolutionize patient care, diagnostics, and treatment protocols. Collaborative efforts among healthcare systems, research institutions, and industry are pivotal to leveraging AI's full potential. This study aims to characterize collaborative networks and stakeholders in AI healthcare initiatives, identify challenges and opportunities within these collaborations, and elucidate priorities for future AI research and development. Methods: This study utilized data from the Chinese Society of Radiology and the Chinese Medical Imaging AI Innovation Alliance. A national cross-sectional survey was conducted in China (N = 5,142) across 31 provincial administrative regions, involving participants from three key groups: clinicians, institution professionals, and industry representatives. The survey explored diverse aspects including current AI usage in healthcare, collaboration dynamics, challenges encountered, and research and development priorities. Results: Findings reveal high interest in AI among clinicians, with a significant gap between interest and actual engagement in development activities. Despite the willingness to share data, progress is hindered by concerns about data privacy and security, and lack of clear industry standards and legal guidelines. Future development interests focus on lesion screening, disease diagnosis, and enhancing clinical workflows. Conclusion: This study highlights an enthusiastic yet cautious approach toward AI in healthcare, characterized by significant barriers that impede effective collaboration and implementation. Recommendations emphasize the need for AI-specific education and training, secure data-sharing frameworks, establishment of clear industry standards, and formation of dedicated AI research departments.
- Asia > China (0.25)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Texas (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.94)
Position: We Need Responsible, Application-Driven (RAD) AI Research
Hartman, Sarah, Ong, Cheng Soon, Powles, Julia, Kuhnert, Petra
This position paper argues that achieving meaningful scientific and societal advances with artificial intelligence (AI) requires a responsible, application-driven approach (RAD) to AI research. As AI is increasingly integrated into society, AI researchers must engage with the specific contexts where AI is being applied. This includes being responsive to ethical and legal considerations, technical and societal constraints, and public discourse. We present the case for RAD-AI to drive research through a three-staged approach: (1) building transdisciplinary teams and people-centred studies; (2) addressing context-specific methods, ethical commitments, assumptions, and metrics; and (3) testing and sustaining efficacy through staged testbeds and a community of practice. We present a vision for the future of application-driven AI research to unlock new value through technically feasible methods that are adaptive to the contextual needs and values of the communities they ultimately serve.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Oceania > Australia > Western Australia (0.04)
- Oceania > Australia > Northern Territory > Darwin (0.04)
- (6 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- Food & Agriculture > Agriculture (0.93)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Artificial Intelligence > Applied AI (0.93)